
    Evolution of swarming behavior is shaped by how predators attack

    Animal grouping behaviors have been widely studied due to their implications for understanding social intelligence, collective cognition, and potential applications in engineering, artificial intelligence, and robotics. An important biological aspect of these studies is discerning which selection pressures favor the evolution of grouping behavior. In the past decade, researchers have begun using evolutionary computation to study the evolutionary effects of these selection pressures in predator-prey models. The selfish herd hypothesis states that concentrated groups arise because prey selfishly attempt to place their conspecifics between themselves and the predator, thus causing an endless cycle of movement toward the center of the group. Using an evolutionary model of a predator-prey system, we show that how predators attack is critical to the evolution of the selfish herd. Following this discovery, we show that density-dependent predation provides an abstraction of Hamilton's original formulation of "domains of danger." Finally, we verify that density-dependent predation provides a sufficient selective advantage for prey to evolve the selfish herd in response to predation by coevolving predators. Thus, our work corroborates Hamilton's selfish herd hypothesis in a digital evolutionary model, refines the assumptions of the selfish herd hypothesis, and generalizes the domain of danger concept to density-dependent predation.

    Comment: 25 pages, 11 figures, 5 tables, including 2 Supplementary Figures. Version to appear in "Artificial Life".
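    The core mechanism, density-dependent predation, is straightforward to illustrate. Below is a minimal sketch (not the authors' digital evolution model): it uses nearest-neighbor distance as a stand-in for the size of a prey's "domain of danger" and has the predator strike isolated prey proportionally more often, so that clustering lowers an individual's risk. The function names and the 100x100 arena are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_neighbor_distance(positions):
    """Distance from each prey to its nearest conspecific, a common
    proxy for the size of Hamilton's 'domain of danger'."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)  # ignore self-distance
    return dists.min(axis=1)

def density_dependent_attack(positions, rng):
    """Choose one victim with probability proportional to its isolation:
    prey in sparse regions are attacked more often, which is the
    selective pressure favoring the selfish herd."""
    risk = nearest_neighbor_distance(positions)
    return rng.choice(len(positions), p=risk / risk.sum())

# 50 prey scattered uniformly on a 100x100 plane; one attack per call.
prey = rng.uniform(0, 100, size=(50, 2))
victim = density_dependent_attack(prey, rng)
print(f"prey {victim} taken (nearest neighbor at "
      f"{nearest_neighbor_distance(prey)[victim]:.2f} units)")
```

    Repeating such attacks over generations of evolving prey movement strategies is what allows the selective advantage of grouping to be measured.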

    PMLB: A Large Benchmark Suite for Machine Learning Evaluation and Comparison

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task, depending on the target problem and the goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. This work is an important first step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.

    Comment: 14 pages, 5 figures, submitted for review to JMLR
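    PMLB ships with a small Python interface; a minimal sketch of the intended workflow (assuming the `pmlb` package and scikit-learn are installed) fetches one curated dataset and scores a baseline method on it. Looping the same few lines over `pmlb.classification_dataset_names` yields the kind of suite-wide comparison the abstract describes.

```python
from pmlb import fetch_data  # PMLB's dataset-fetching helper
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Download (and locally cache) one benchmark dataset as (features, labels).
X, y = fetch_data('mushroom', return_X_y=True)

# Score a simple baseline with 5-fold cross-validation.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mushroom: mean accuracy = {scores.mean():.3f}")
```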